01 — Context is everything

The gap isn't capability. It's input quality.

The tools your team now has access to — Claude, Copilot, Figma Make, Azure DevOps integrations — are genuinely capable. The question is whether you're using them at 20% of their potential or 80%.

01

You have the tools.

Claude, Copilot, Azure DevOps, Figma Make — the technology is there. The gap isn't capability; it's the quality of input these tools receive. An AI coding assistant given a precise, data-rich requirement with clear acceptance criteria produces scaffolded, testable code. Given a sentence fragment, it produces a guess.

02

The napkin sketch problem.

If you hand a brilliant contractor a napkin sketch and say "build me a house," you'll get a house — but probably not the one you wanted. AI tools work the same way. The intelligence is real, but it is not a substitute for specification. Ambiguous input produces confident-sounding output that may be entirely wrong for your context.

03

In financial software, ambiguity is expensive.

A vague requirement interpreted loosely by an AI coding assistant could mean a workflow that doesn't match financial reality, a compliance gap, or a data integrity issue. Acme Corp's products run in regulated environments. The cost of misinterpretation isn't a UX bug — it can cascade into a SOC 2 Type II deviation or a customer data integrity issue that surfaces during audit.

04

The telephone game is real.

Requirements gathering is your biggest bottleneck, with downstream impacts across design, development, and validation — and rework cycles surfacing 3+ months into projects. Every handoff — from loan ops to PM, PM to designer, designer to developer, developer to QA — is an opportunity for the original intent to mutate. Structured requirements survive the telephone game. Narrative prose doesn't.

02 — Where Acme Corp is today

An honest look at current practice

This is not a critique — it's a baseline. Understanding where the friction actually lives is the only way to know where AI can genuinely help versus where it would just add noise.

1

Stakeholder constraints are real.

Requirements start as conversations, emails, or voicemails from stakeholders who may have only 30 minutes per week — busy Loan Officers, Credit Analysts, and Risk Managers who are domain experts but not specification writers. You cannot expect a risk operations lead to produce a well-formed user story. That translation work is yours, and AI can help you do it faster.

2

Word documents are the norm.

There is no standard tooling for capturing requirements before they enter Helix ALM. Documents vary in structure, level of detail, and authoring style depending on the team lead. Some requirements are comprehensive. Others are a list of bullets from a Zoom call. An AI extraction step applied consistently to these raw inputs can produce a baseline level of structure before anything is formalized.

3

Late migration to Helix.

Requirements don't enter Helix ALM until near the end of development — by that point, the code is written and the requirement is just documentation of what was built. This inverts the value of a requirements tool. If AI-generated documentation and prototypes can create a feedback loop earlier, requirements can serve their actual purpose: aligning interpretation before work begins, not recording history after it ends.

4

AI tools aren't being fed.

Teams aren't yet taking advantage of AI-generated documentation or GenAI prototypes to validate design directions early. The tools exist and the team has access to them. The missing piece is the input quality that lets them perform. A Copilot or Figma Make session that starts with structured, context-rich requirements produces meaningfully different output than one that starts with the same Word document that's been circulating for three weeks.

What we're NOT doing

Not adding bureaucracy. Not asking developers to write formal specifications before a single line of code. Not mandating new tools for their own sake. Not creating a process that requires a two-day workshop before a feature can begin.

What we ARE doing

Showing you how to use AI itself to bridge the gap between unstructured stakeholder input and structured, AI-ready requirements — in minutes, not days. The goal is to reduce the overhead of good practice, not increase it.

03 — The multiplier math

One requirement. Five downstream artifacts.

When AI tools are in your pipeline, the requirement isn't just documentation — it's the seed input for every artifact that follows. Structure it once, and the entire chain benefits.

📋 Requirement — structured user story with data model + acceptance criteria
→ Figma Make — UI mockup seeded with real data fields and states
→ Copilot / Claude — scaffolded Angular component with correct data types
→ AI Testing — generated test cases tied to acceptance criteria
→ ADO / Helix — backlog items and work items populated from requirement fields
Vague Requirement

"Allow users to manage loan applications."

The developer interprets it one way, the designer another, QA writes tests against a third interpretation, and you spend your sprint in clarification meetings. Figma Make produces a generic form. Copilot guesses at data types and invents field names. Test cases don't map to real financial scenarios. Rework begins the moment you demo to a stakeholder.

Well-Structured Requirement

"Loan Officer views application status for all 6 branches at a glance, sortable by % to approval target."

One requirement, structured once — AI-generated UI mockups with the right columns, scaffolded Angular table component with correct data types, generated test cases covering the happy path and the "branch at risk" edge case, and populated backlog items. The Figma Make output is reviewable by a Loan Officer the same day it's written. Minimal rework because there's nothing to reinterpret.

The ROI is simple: An extra 15 minutes structuring a requirement up front saves hours of rework when AI tools are in the pipeline. At Acme Corp's cadence, with validation cycles and SOC 2 Type II change control overhead, a single misinterpreted requirement that reaches the validation phase can cost days — not hours.

04 — Applied to LOAN-2024-Q3

The same feature. Two very different requirements.

The Loan Application Review Table for portfolio LOAN-2024-Q3 is the running example throughout this playbook. Here's what the gap between a vague requirement and an AI-ready one looks like in practice — using a real feature, real roles, and real data.

Before — Vague
"The system should allow users to manage loan applications."
  • No specific role context — "users" could mean Loan Officer, Credit Analyst, Risk Manager, or Compliance Officer. Each needs different data and different permissions.
  • No data specifics — Figma Make produces a generic table. Copilot invents field names that don't match the financial data model.
  • No validation rules — what constitutes "on track" vs. "at risk"? What triggers a status change? These decisions get made silently in code.
  • No UI guidance — sortable? Filterable? Exportable? Inline editing or read-only? Each decision is deferred to the developer at implementation time.
  • No compliance angle — does application status require an audit trail? Can it be edited after lock? SOC 2 Type II and PCI-DSS implications go unaddressed.
After — AI-Ready
Story
As a Loan Officer managing portfolio LOAN-2024-Q3, I need to view the application status of all 6 branches at a glance so I can identify branches at risk of falling behind target before the weekly portfolio review call.
Data Fields
Branch ID · Branch Name · Target Approvals · Actual Approvals · % to Target · Last Updated · Status
Status Values
On Track / At Risk / Behind — threshold: At Risk when actual < 85% of target with < 30 days to milestone; Behind when actual < 70%.
UI / Interaction
Sortable table, all columns. Inline filter by status. Export to CSV. Color-blind-safe status indicators: icon + label, never color alone (WCAG 2.1 AA).
  • Figma Make produces a table with the exact six branch rows and correct column headers on first pass.
  • Copilot scaffolds an Angular Material table component with correct TypeScript types and no invented fields.
  • Generated test cases map to defined thresholds — "At Risk" logic is testable because the threshold is explicit.
  • SOC 2 Type II audit trail requirement is flagged at writing time, not discovered during validation.
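Because the thresholds and field list are explicit, the "At Risk" logic is something a tool can scaffold rather than guess at. A minimal sketch of what a Copilot-style scaffold could look like, where the field names and types are illustrative assumptions, not Acme Corp's actual data model:

```typescript
// Row shape for the Loan Application Review Table (illustrative field names).
interface BranchStatusRow {
  branchId: string;
  branchName: string;
  targetApprovals: number;
  actualApprovals: number;
  lastUpdated: Date;
}

type BranchStatus = "On Track" | "At Risk" | "Behind";

// Thresholds taken directly from the requirement: Behind when actual < 70%
// of target; At Risk when actual < 85% AND fewer than 30 days remain to the
// milestone; otherwise On Track.
function branchStatus(row: BranchStatusRow, daysToMilestone: number): BranchStatus {
  const pctToTarget =
    row.targetApprovals === 0 ? 0 : row.actualApprovals / row.targetApprovals;
  if (pctToTarget < 0.7) return "Behind";
  if (pctToTarget < 0.85 && daysToMilestone < 30) return "At Risk";
  return "On Track";
}
```

The boundary case the bullets mention (84% to target with 29 days left) is now a concrete, generatable test input instead of a silent judgment call made in code.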
What this means in practice

The after version isn't longer because a process told someone to write more. It's more specific because it encodes decisions that would have been made anyway — just silently, by whoever touched the feature next. The structured requirement makes those decisions explicit, reviewable, and correctable before anyone writes a line of code or spends two hours in Figma. That's the multiplier: the same decisions, made earlier, with the right people in the room.

05 — Prompt to try

Extract structured requirements from meeting notes

Feed this to Claude immediately after your next stakeholder meeting. Paste your raw notes — whether that's a transcript, a bulleted email, or a stream-of-consciousness summary — and get back structured, AI-ready requirements in under two minutes.

Requirements Extraction Prompt
I have notes from a stakeholder meeting about [feature]. Extract the requirements
as user stories using this template:

Story:       As a [specific role + context], I need to [action] so that [measurable outcome].
Acceptance:  Given / When / Then — testable, covering happy path and edge cases.
Data:        Inputs, outputs, data types, validation rules, data sources.
UI/UX:       Layout, key interactions, states (loading, empty, success, error).
Compliance:  SOC 2 Type II / PCI-DSS considerations, audit trail needs.
Edge Cases:  Boundary conditions, error states, network failures.

Flag any ambiguities and list follow-up questions.

[Paste your meeting notes here]
What to paste in

Raw meeting notes, a bullet-point email from a stakeholder, a Zoom transcript, a voice memo transcription — anything. The prompt handles the mess. You do not need to clean up the input first.
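If you run this extraction after every meeting, the prompt itself can be templated so the only variables are the feature name and the raw notes. A minimal sketch, assuming a hypothetical `buildExtractionPrompt` helper (not part of Claude or any tool named above):

```typescript
// Interpolates the feature name and raw notes into the extraction prompt.
// The template text mirrors the Requirements Extraction Prompt above.
function buildExtractionPrompt(feature: string, rawNotes: string): string {
  return [
    `I have notes from a stakeholder meeting about ${feature}. Extract the requirements`,
    `as user stories using this template:`,
    ``,
    `Story:       As a [specific role + context], I need to [action] so that [measurable outcome].`,
    `Acceptance:  Given / When / Then — testable, covering happy path and edge cases.`,
    `Data:        Inputs, outputs, data types, validation rules, data sources.`,
    `UI/UX:       Layout, key interactions, states (loading, empty, success, error).`,
    `Compliance:  SOC 2 Type II / PCI-DSS considerations, audit trail needs.`,
    `Edge Cases:  Boundary conditions, error states, network failures.`,
    ``,
    `Flag any ambiguities and list follow-up questions.`,
    ``,
    rawNotes.trim(),
  ].join("\n");
}
```

One template, checked into the team's prompt library, means every extraction starts from the same six fields regardless of who runs it.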

What you get back

One or more structured user stories with all six fields populated, plus a list of flagged ambiguities and specific follow-up questions you can take back to the stakeholder — or resolve yourself if the answer is clear from context.

Next step

Take the structured output directly into Figma Make, Copilot, or your next stakeholder review. Each subsequent tool in the chain starts from a defined baseline instead of having to infer intent from scratch.

Coming up in Phase 02

Phase 02 covers stakeholder intake — how to structure the conversation before the meeting so you come out with enough to write a good requirement, not just a to-do list. Phase 03 walks through the full AI-ready requirement template and how to tune it for Acme Corp's specific toolchain and compliance environment.

SDLC Context

Where this fits in the full pipeline.

Phase 01 — Requirements. Structured input here cascades through every downstream AI tool in the SDLC.

[Diagram: AI-Powered SDLC pipeline with Phase 01 — Requirements highlighted.]